Despite significant progress in object categorization in recent years, a number of important challenges remain; chief among them are the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires a separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above challenges and to address supervised, zero-shot, generalized zero-shot, and open-set recognition within a unified framework. Specifically, we propose a weighted maximum-margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. The distance constraints ensure that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We show that the resulting model improves supervised, zero-shot, generalized zero-shot, and large open-set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
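A minimal sketch of the kind of distance constraint described above, assuming a precomputed matrix `prototypes` of semantic vectors for the full vocabulary and already-projected image embeddings; the function name, margin value, and hinge form are illustrative, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def vocab_informed_margin_loss(embeddings, labels, prototypes, margin=0.1):
    """Hinge penalty pushing each embedded sample closer to its correct
    class prototype than to every other vocabulary atom (hypothetical).

    embeddings: (B, D) projected image features
    labels:     (B,)   indices into `prototypes`
    prototypes: (V, D) semantic vectors for the full vocabulary
    """
    # Squared Euclidean distance from every sample to every prototype: (B, V)
    dists = torch.cdist(embeddings, prototypes).pow(2)
    correct = dists.gather(1, labels.unsqueeze(1))          # (B, 1)
    # Mask out each sample's own class so it incurs no self-violation.
    mask = torch.ones_like(dists).scatter_(1, labels.unsqueeze(1), 0.0)
    return (F.relu(correct - dists + margin) * mask).mean()
```

Because the penalty ranges over every vocabulary atom, unsupervised classes also shape the embedding, which is the point of vocabulary-informed learning.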
Graph-based change point detection (CPD) plays an irreplaceable role in discovering anomalous graphs in time-varying networks. While several techniques detect change points by testing whether the target network differs significantly from its immediate predecessors, they neglect the natural evolution of the network. In practice, real-world graphs such as social networks, traffic networks, and rating networks evolve constantly over time. We therefore treat CPD as a prediction task and propose a novel method for dynamic graphs based on a latent evolution model. Our method simultaneously learns low-dimensional representations of the networks and captures the evolving patterns of these latent representations; the learned evolution then yields a prediction of the target network. We detect change points by comparing the prediction with the actual network via a trade-off strategy that balances the predicted network against the normal graph pattern extracted from previous networks. Extensive experiments on both synthetic and real-world datasets show the effectiveness and superiority of our model.
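As a toy illustration of the trade-off strategy (not the authors' actual scoring rule, which operates on learned latent representations), one could score each snapshot by balancing prediction error against deviation from the recent "normal" pattern; the names, the mean-of-history pattern, and the norm choice are all assumptions:

```python
import numpy as np

def change_score(pred_adj, obs_adj, hist_adjs, alpha=0.5):
    """Anomaly score trading off (i) deviation of the observed adjacency
    matrix from the one-step prediction and (ii) deviation from the
    average pattern of recent history. A large score flags a candidate
    change point; `alpha` is the trade-off weight."""
    pred_err = np.linalg.norm(obs_adj - pred_adj)
    pattern = np.mean(hist_adjs, axis=0)      # crude "normal" graph pattern
    pattern_err = np.linalg.norm(obs_adj - pattern)
    return alpha * pred_err + (1 - alpha) * pattern_err
```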
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices of, and bottlenecks faced by, the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Transformer-based language models have become the standard approach to solving natural language processing tasks. However, industry adoption usually requires maximizing throughput while complying with certain latency constraints, which often prevents Transformer models from being used in production. To address this gap, model compression techniques such as quantization and pruning can be used to improve inference efficiency; however, these techniques require specialized software to apply and deploy at scale. In this work, we propose a new pipeline for creating and running Fast Transformer models on CPUs, utilizing hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with kernels optimized for sparse and quantized operators. We demonstrate the efficiency of our pipeline by creating a Fast DistilBERT model that shows minimal accuracy loss on the SQuADv1.1 question-answering benchmark, and we report throughput results under typical production constraints and environments. Our results outperform the existing state-of-the-art Neural Magic DeepSparse runtime by up to 50% and achieve up to a 4.1x speedup over ONNX Runtime. Source code is publicly available at https://github.com/intel/intel-extension-for-transformers.
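The paper's pipeline relies on its own runtime engine; as a rough stand-in, the same prune-then-quantize stages can be sketched with stock PyTorch utilities (the layer sizes and 80% sparsity level below are arbitrary, and this omits the hardware-aware pruning and distillation steps):

```python
import torch
from torch import nn
from torch.nn.utils import prune

# Stand-in for a Transformer feed-forward block.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))

# 1) Unstructured magnitude pruning of the linear weights.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")   # make the sparsity permanent

# 2) Post-training dynamic int8 quantization of the linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

Stock PyTorch will not exploit the induced sparsity at inference time; that is exactly the gap a dedicated sparse/quantized runtime like the one described is meant to fill.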
Peer merit review of research proposals has been the major mechanism for deciding grant awards. However, research proposals have become increasingly interdisciplinary, and it has been a longstanding challenge to assign interdisciplinary proposals to appropriate reviewers so that they are fairly evaluated. One of the critical steps in reviewer assignment is generating accurate interdisciplinary topic labels for proposal-reviewer matching. Existing systems mainly rely on topic labels manually provided by principal investigators, but such human-reported labels can be inaccurate and incomplete, and collecting them is labor-intensive and time-consuming. What role can AI play in developing a fair and precise proposal-reviewer assignment system? In this study, we collaborate with the National Science Foundation of China to address the task of automated interdisciplinary topic path detection. For this purpose, we develop a deep Hierarchical Interdisciplinary Research Proposal Classification Network (HIRPCN). Specifically, we first propose a hierarchical transformer to extract the textual semantic information of proposals. We then design an interdisciplinary graph and leverage GNNs to learn representations of each discipline, thereby extracting interdisciplinary knowledge. Finally, we design a level-wise prediction component that fuses the two types of knowledge representations and detects interdisciplinary topic paths for each proposal. We conduct extensive experiments and expert evaluations on three real-world datasets to demonstrate the effectiveness of our proposed model.
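A hedged sketch of what a level-wise prediction head could look like, assuming fixed-size text and graph representations and feeding each level's prediction into the next; the class name and wiring are illustrative, not the HIRPCN architecture itself:

```python
import torch
from torch import nn

class LevelWisePredictor(nn.Module):
    """Toy fusion head: at each level of the topic hierarchy, combine the
    proposal's text representation with discipline-graph knowledge and the
    previous level's prediction to score that level's candidate topics."""
    def __init__(self, text_dim, graph_dim, level_sizes):
        super().__init__()
        self.heads = nn.ModuleList()
        prev = 0
        for n in level_sizes:                      # e.g. [14, 120, 900]
            self.heads.append(nn.Linear(text_dim + graph_dim + prev, n))
            prev = n

    def forward(self, text_repr, graph_repr):
        preds, prev = [], None
        for head in self.heads:
            parts = [text_repr, graph_repr]
            if prev is not None:
                parts.append(prev)                 # condition on parent level
            prev = torch.softmax(head(torch.cat(parts, dim=-1)), dim=-1)
            preds.append(prev)
        return preds   # one topic distribution per hierarchy level
```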
Recently, change detection methods for synthetic aperture radar (SAR) images based on convolutional neural networks (CNNs) have attracted increasing research attention. However, existing CNN-based methods neglect the interactions among multi-layer convolutions, and the pre-classification they involve restricts network optimization. To this end, we propose a layer-attention-based noise-tolerant network, called LANTNet. In particular, we design a layer attention module that adaptively weights the features of different convolutional layers. Furthermore, we design a noise-tolerant loss function that effectively suppresses the impact of noisy labels, making the model insensitive to noisy labels in the pre-classification results. Experimental results on three SAR datasets show that the proposed LANTNet performs better than several state-of-the-art methods. The source code is available at https://github.com/summitgao/lantnet
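A minimal sketch of the layer-attention idea in PyTorch, assuming all stages' feature maps share one shape; LANTNet's actual module is more involved, so treat the names and the scalar weighting scheme as assumptions:

```python
import torch
from torch import nn

class LayerAttention(nn.Module):
    """Toy layer attention: learn one softmax weight per convolutional
    stage and fuse the stages' feature maps as a weighted sum, letting
    the network decide how much each depth contributes."""
    def __init__(self, num_layers):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_layers))

    def forward(self, feats):          # feats: list of (B, C, H, W) tensors
        w = torch.softmax(self.scores, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))
```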
In most real-world recommendation scenarios, users exhibit multiple types of behavior (e.g., clicking, adding to cart, purchasing), which are beneficial for learning users' multifaceted preferences. Since the behavior types exhibit explicit dependencies, effectively modeling these complex dependencies is crucial for multi-behavior prediction. State-of-the-art multi-behavior models take all historical interactions as input and learn behavior dependencies indiscriminately. However, different behaviors may reflect different aspects of user preference, meaning that some irrelevant interactions can act as noise when predicting the target behavior. To address these limitations, we introduce multi-interest learning into multi-behavior recommendation. More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests for different behaviors. CKML introduces two advanced modules, Coarse-grained Interest Extraction (CIE) and Fine-grained Behavioral Correlation (FBC), which work jointly to capture fine-grained behavior dependencies. CIE uses knowledge-aware information to extract an initial representation for each interest, and FBC incorporates a dynamic routing scheme to further assign each behavior among the interests. In addition, we use a self-attention mechanism to correlate different behavioral information at the interest level. Empirical results on three real-world datasets verify the effectiveness and efficiency of our model in leveraging multi-behavior data, and further experiments demonstrate the effectiveness of each module as well as the robustness and superiority of the shared and behavior-specific modeling paradigm.
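As a rough analogue of FBC's dynamic routing (the exact CKML formulation is not reproduced here), a routing-by-agreement step that softly assigns behavior embeddings to interest prototypes might look like this; all names and the iteration count are assumptions:

```python
import torch

def route_behaviors(behavior_emb, interest_emb, iters=3):
    """Toy routing-by-agreement: softly assign behavior embeddings
    (B, N, D) to K interest prototypes (K, D), refining the assignment
    by agreement over a few iterations."""
    logits = behavior_emb @ interest_emb.t()                   # (B, N, K)
    for _ in range(iters):
        assign = torch.softmax(logits, dim=-1)                 # soft assignment
        # Interests as assignment-weighted sums of behavior embeddings.
        interests = torch.einsum('bnk,bnd->bkd', assign, behavior_emb)
        # Agreement between each behavior and each refined interest.
        agreement = torch.einsum('bnd,bkd->bnk', behavior_emb, interests)
        logits = logits + agreement
    return torch.softmax(logits, dim=-1)
```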
Learning-to-rank (LTR) techniques are now ubiquitous in information retrieval systems, especially in search ranking applications. The query-item relevance labels typically used to train ranking models are often noisy measurements of human behavior, e.g., product ratings in product search. Such coarse measurements make the ground-truth ranking non-unique with respect to any single relevance criterion. To resolve this ambiguity, it is desirable to train a model on multiple relevance criteria, giving rise to multi-label LTR (MLLTR). Moreover, this formulates multiple goals that may be optimized simultaneously; for example, in product search a ranking model can be trained on both product quality and purchase likelihood in order to increase revenue. In this research, we exploit the multi-objective optimization (MOO) nature of the MLLTR problem and employ recently developed MOO algorithms to solve it. Specifically, we propose a general framework in which the information from the labels can be combined in a variety of ways to meaningfully characterize the trade-offs among the goals. Our framework allows any gradient-based MOO algorithm to be used for solving the MLLTR problem. We test the proposed framework on two publicly available LTR datasets and one e-commerce dataset to demonstrate its efficacy.
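The simplest gradient-based instance of such a framework is a fixed linear scalarization of one ranking loss per relevance criterion; the sketch below assumes a pairwise logistic loss and is only a baseline, not the recently developed MOO algorithms the paper employs:

```python
import torch
import torch.nn.functional as F

def scalarized_mlltr_loss(scores, labels, weights):
    """Linear scalarization across relevance criteria (assumed example).

    scores:  (N,)   model scores for the items of one query
    labels:  (L, N) one row of relevance labels per criterion
    weights: (L,)   trade-off weights, e.g. summing to 1
    """
    total = scores.new_zeros(())
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)          # (N, N)
    for w, y in zip(weights, labels):
        # Pairwise logistic loss over all pairs (i, j) with y_i > y_j.
        pref = (y.unsqueeze(1) > y.unsqueeze(0)).float()
        total = total + w * (pref * F.softplus(-diff)).sum()
    return total
```

Varying `weights` traces out different trade-offs among the criteria; dedicated MOO algorithms replace this fixed weighting with adaptive gradient combination.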
Accurate extraction of coronary arteries from invasive coronary angiography (ICA) is important in clinical decision-making for the diagnosis and risk stratification of coronary artery disease (CAD). In this study, we develop a deep learning method to automatically extract the coronary artery lumen. Methods. A deep learning model, U-Net 3+, which incorporates full-scale skip connections and deep supervision, is proposed for the automatic extraction of coronary arteries from ICAs. Transfer learning and a hybrid loss function are employed in this novel coronary artery extraction framework. Results. A dataset containing 616 ICAs obtained from 210 patients was used. In the technical evaluation, U-Net 3+ achieved a Dice score of 0.8942 and a sensitivity of 0.8735, higher than U-Net++ (Dice score: 0.8814, sensitivity: 0.8331) and U-Net (Dice score: 0.8799, sensitivity: 0.8305). Conclusion. Our study demonstrates that U-Net 3+ is superior to other segmentation frameworks for the automatic extraction of coronary arteries from ICAs, suggesting great promise for clinical use.
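The abstract does not specify the hybrid loss's composition; a common choice for vessel segmentation, shown purely as an assumed example, combines binary cross-entropy with a soft Dice term:

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, smooth=1e-6):
    """Assumed hybrid segmentation loss: BCE plus soft Dice.
    `logits` are raw network outputs; `target` is a float mask in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    dice = 1 - (2 * inter + smooth) / (probs.sum() + target.sum() + smooth)
    return bce + dice
```

The Dice term directly targets overlap with thin vessel structures, while BCE stabilizes per-pixel gradients early in training.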
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. Such black-box methods, however, often exhibit uncertainty and poor interpretability in practice, and neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is high demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network under maliciously manipulated inputs, is one of the hottest topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification of neural networks, collecting 39 diverse research works across the fields of machine learning, security, and software engineering. We systematically analyze their approaches, including how robustness is formulated, which verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal verification perspective to enable a comprehensive understanding of the topic, classifying existing techniques by property specification, problem reduction, and reasoning strategies. We also demonstrate representative techniques applied in existing studies on sample models. Finally, we discuss open questions for future research.
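As one representative verification technique of the kind such surveys cover, interval bound propagation pushes a box of inputs through an affine layer to obtain sound output bounds; this generic sketch is illustrative and not tied to any specific surveyed work:

```python
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Interval bound propagation through one affine layer: given
    element-wise input bounds [lo, hi], return output bounds valid for
    every x in the box (one step of a standard incomplete verifier)."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2   # box center and radius
    out_mid = W @ mid + b
    out_rad = np.abs(W) @ rad                 # worst-case radius growth
    return out_mid - out_rad, out_mid + out_rad
```

Chaining such steps through all layers (with, e.g., `np.maximum(lo, 0)` for ReLU) bounds the network's outputs over an entire perturbation ball, which is how robustness properties are certified without enumerating inputs.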